Category: Geeks r Us
First, we had SATA 1.5 gigabits per second technology.
This was just a slight step up from PATA/133, quite unremarkable, yet many notebooks used it, such as Dell's.
But it took quite some time before it even got recognition. The primary benefit was freedom from master/slave configurations.
Then came SATA II, or SATA 3 gigabits per second. This allowed 7,200 and 10,000 RPM hard drives to finally reach their potential in raw throughput, meaning the interface could move roughly 270 megabytes per second over a SATA connection.
Now SATA 6.0 Gbit/s is being introduced, and after its bugs are ironed out, one could potentially transfer about 560 megabytes per second.
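If you're wondering where those numbers come from, here's a quick back-of-envelope in Python (my own sketch, not from any spec sheet; it only shows the ceilings implied by the encoding):

    # Rough arithmetic, not a benchmark: SATA moves bits with 8b/10b encoding,
    # so only 8 of every 10 bits on the wire carry data.
    for name, gbit in [("SATA 1.5", 1.5), ("SATA II (3.0)", 3.0), ("SATA 6.0", 6.0)]:
        payload_bits_per_s = gbit * 1e9 * 8 / 10     # strip the encoding overhead
        print(f"{name}: ~{payload_bits_per_s / 8 / 1e6:.0f} MB/s ceiling")
    # prints ~150, ~300 and ~600 MB/s, which is why figures like ~270 and ~560 MB/s
    # show up in practice once the rest of the protocol overhead is taken off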
Why is this even important?
SSDs have the potential to burst upwards of 700 megabytes or so per second, and once the controllers from both JMicron and Intel are improved, sustained transfer speeds could reach about 600 megabytes per second. Which means SSDs will blow any platter-based hard drive out of the water in performance and throughput; not that they don't already, but even more so.
Motherboards with the Lynnfield CPUs should see native SATA 6.0 Gbit/s once Marvell overcomes a major bug with the PATA connection conflicting with the south bridge.
This SATA 6.0 should also let gigabit networks run at their full potential, eliminating the bottleneck at the hard-drive end.
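Some rough numbers behind that claim; the drive speeds below are ballpark figures I'm assuming, not measurements:

    # Gigabit Ethernet tops out around 125 MB/s on the wire, which is already more
    # than a typical platter drive sustains but well under what an SSD can do.
    gigabit_mb_s = 1e9 / 8 / 1e6       # ~125 MB/s
    hdd_sustained_mb_s = 100           # rough figure for a 7200 RPM drive
    ssd_sustained_mb_s = 250           # rough figure for a current SSD
    print("network ceiling:", gigabit_mb_s, "MB/s")
    print("HD keeps up?   ", hdd_sustained_mb_s >= gigabit_mb_s)    # False
    print("SSD keeps up?  ", ssd_sustained_mb_s >= gigabit_mb_s)    # True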
You're making me dribble. Did you see the marketing video that was going around a few months ago where Samsung RAIDed 24 SSDs? Craaaaaazy speeds.
Hahah, now if SSDs were about 10 cents per gig, then perhaps we could get 10+ of them and RAID 0 or 5 them.
That would be the plan, but first they have to come down in price and go up in capacity.
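Just for fun, a back-of-envelope on that daydream; every number here (drive size, price per gig, per-drive speed) is made up for illustration:

    # Ignores the RAID controller's own limits entirely.
    drives, size_gb, price_per_gb = 10, 64, 0.10
    per_drive_read_mb_s = 250
    print(f"cost:   ${drives * size_gb * price_per_gb:.0f}")           # $64
    print(f"RAID 0: ~{drives * per_drive_read_mb_s} MB/s aggregate")   # 2500 MB/s, in theory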
SSDs have come down quite a bit; I regularly see a 64 GB one for $99 now, and last year they were 200 plus. I still think it'll be 3 to 5 years before we start seeing prices comparable with HDs. Also, I thought SSDs still suffered from the fact that you can only write to them and read from them x number of times before they start failing. Of course it is a pretty damn large number, and HDs fail too with alarming regularity; I've had 4 hard drives crash on me in 20 years of computer usage, and while that is only one every 5 years, it's a bloody annoying thing to happen.
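On the "x number of times" point, a rough calculation shows why that number ends up so large; every figure here is an assumption for illustration, not a datasheet value:

    # Ignores write amplification, which would shorten this in real life.
    capacity_gb = 64
    pe_cycles = 10_000            # assumed program/erase cycles per cell
    daily_writes_gb = 20          # assumed amount written per day
    lifetime_days = capacity_gb * pe_cycles / daily_writes_gb
    print(f"~{lifetime_days / 365:.0f} years before the cells wear out")   # decades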
"Also I thought SSDs still sufferred from the fact you can only write to them and read from them x number of times
before they start failing."
This was due to the fact that flash drives used to be written to in an orderly fashion. Manufacturers solved that issue by spreading writes across the cells (wear leveling) so that no one cell gets rewritten too many times. Take for example a 100 GB SSD: it might be divided into 10 blocks of 10 GB each. If you download an MP3, it might be written to the 3rd block. Once you delete it and download another, it might be written to the 5th, and so on. Also, I understand that within each block, a piece of data is not necessarily kept in the same spot if it's overwritten. This way, as long as the data remains whole, the SSD is protected from any one area suffering cell degradation. I explain a major fault in SSDs in my post in the "SSDs, is it worth going cheaper" topic, which at first blush sounds very similar to cell degradation but is significantly different.
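Here's a toy sketch of that idea in Python; it's purely my own illustration of wear leveling, nothing like real SSD firmware:

    # Toy model of the idea above: every new write lands on the physical block
    # with the fewest erase cycles, so no single block wears out early.
    class ToySSD:
        def __init__(self, blocks=10):
            self.erase_counts = [0] * blocks     # wear per physical block
            self.mapping = {}                    # logical name -> physical block

        def write(self, name):
            # pick the least-worn block (ignoring real allocation rules)
            target = min(range(len(self.erase_counts)),
                         key=lambda b: self.erase_counts[b])
            self.erase_counts[target] += 1
            self.mapping[name] = target

    ssd = ToySSD()
    for i in range(25):
        ssd.write(f"song{i}.mp3")
    print(ssd.erase_counts)    # wear stays roughly even: [3, 3, 3, 3, 3, 2, 2, 2, 2, 2]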